
    Spectral analysis of the high-energy IceCube neutrinos

    A full energy- and flavor-dependent analysis of the three-year high-energy IceCube neutrino events is presented. By means of multidimensional fits, we derive the current preferred values of the high-energy neutrino flavor ratios, the normalization and spectral index of the astrophysical fluxes, and the expected atmospheric background events, including a prompt component. A crucial assumption resides in the choice of the energy interval used for the analyses, which significantly biases the results. When restricting ourselves to the ~30 TeV - 3 PeV energy range, which contains all the observed IceCube events, we find that the inclusion of the spectral information improves the fit to the canonical flavor composition at Earth, (1:1:1), with respect to a single-energy-bin analysis. Increasing both the minimum and the maximum deposited energies has dramatic effects on the reconstructed flavor ratios as well as on the spectral index. Imposing a higher threshold of 60 TeV yields a slightly harder spectrum by allowing a larger muon neutrino component, since above this energy most atmospheric track-like events are effectively removed. Extending the high-energy cutoff to fully cover the Glashow resonance region leads to a softer spectrum and a preference for tau neutrino dominance, as none of the expected electron antineutrino induced showers have been observed so far. The lack of showers at energies above 2 PeV may point to a broken power-law neutrino spectrum. Future data may establish whether the recently discovered high-energy neutrino fluxes and the long-observed cosmic rays have a common origin. Comment: 33 pages, 13 figures. v3: one extra figure (fig. 13), some references updated and some formulae moved to the Appendix. It matches the version published in PR
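
    As a purely illustrative sketch of the dependence discussed above (not the paper's analysis code), the snippet below integrates a power-law astrophysical flux against a made-up placeholder effective area to show how the expected event count shifts with the spectral index and with the chosen deposited-energy window. All numbers and the effective-area shape are assumptions for illustration only.

        # Toy model: expected astrophysical events for a power-law flux
        # phi0 * (E / 100 TeV)**(-gamma) over a chosen energy window.
        # The effective area A_eff below is a hypothetical placeholder.
        import numpy as np

        def expected_events(phi0, gamma, e_min_tev=30.0, e_max_tev=3000.0,
                            livetime_s=3 * 365.25 * 86400):
            energies = np.logspace(np.log10(e_min_tev), np.log10(e_max_tev), 500)  # TeV
            flux = phi0 * (energies / 100.0) ** (-gamma)      # per (TeV cm^2 s sr)
            a_eff_cm2 = 1.0e4 * (energies / 100.0) ** 0.5     # assumed A_eff shape
            integrand = 4 * np.pi * flux * a_eff_cm2          # all-sky
            # trapezoidal integration over energy
            return livetime_s * np.sum(0.5 * (integrand[1:] + integrand[:-1])
                                       * np.diff(energies))

        # A harder spectrum (smaller gamma) puts relatively more events at high
        # energies, which is why the energy window biases the fitted index.
        print(expected_events(phi0=1e-18, gamma=2.0))
        print(expected_events(phi0=1e-18, gamma=2.3))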

    Design of a realistic PET-CT-MRI phantom

    The validation of the PET image quality of new PET-MRI systems should be done against the image quality of currently available PET-CT systems. This includes the validation of new attenuation correction methods. Such validation studies should preferentially be done using a phantom. There are currently no phantoms that have a realistic appearance on PET, CT and MRI. In this work we present the design and evaluation of such a phantom. The four most important tissue types for attenuation correction are air, lung, soft tissue and bone. An attenuation correction phantom should therefore contain these four tissue types. As it is difficult to mimic bone and lung on all three modalities using a synthetic material, we propose the use of biological material obtained from cadavers. For the lung section, a lobe of a pig lung was used; it was excised and inflated using a ventilator. For the bone section, the middle section of a bovine femur was used. Both parts were fixed inside a PMMA cylinder with a radius of 10 cm. The phantom was filled with 18F-FDG, and two hot spheres and one cold sphere were added. First, a PET scan was acquired on a PET-CT system. Subsequently, a transmission measurement and a CT acquisition were performed on the same system. Afterwards, the phantom was moved to the MRI facility and a UTE-MRI was acquired. Average CT values and MRI R2 values in bone and lung were calculated to evaluate the realistic appearance of the phantom on both modalities. The PET data were reconstructed with CT-based, transmission-based and MRI-based attenuation correction. The activity in the hot and cold spheres in the images reconstructed using transmission-based and MRI-based attenuation correction was compared to the reconstructed activity using CT-based attenuation correction. The average CT values in lung and bone were -630 HU and 1300 HU, respectively. The average R2 values were 0.7 ms⁻¹ and 1.05 ms⁻¹, respectively. These values are comparable to the values observed in clinical data sets. Transmission-based and MRI-based attenuation correction yielded average differences with CT-based attenuation correction in the hot spots of -22% and -8%, respectively. In the cold spot, the average differences were +3% and -8%. The construction of a PET-CT-MRI phantom was described. The phantom has a realistic appearance on all three modalities. It was used to evaluate two attenuation correction methods for PET-MRI scanners.
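
    As a minimal sketch of the comparison reported above (hypothetical numbers and structure, not the study's reconstruction pipeline), the quoted percentages can be read as relative differences of each attenuation correction (AC) method with respect to the CT-based reference:

        # Relative difference of a test AC method versus the CT-based reference.
        def percent_difference(test_activity, ct_activity):
            return 100.0 * (test_activity - ct_activity) / ct_activity

        # Hypothetical reconstructed activities (arbitrary units) in one hot sphere.
        ct_based = 10.0
        transmission_based = 7.8   # would give roughly -22%
        mri_based = 9.2            # would give roughly -8%

        print(percent_difference(transmission_based, ct_based))
        print(percent_difference(mri_based, ct_based))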

    Combining static and dynamic approaches to model the performance of HPC loops

    The complexity of CPUs has increased considerably since their beginnings, introducing mechanisms such as register renaming, out-of-order execution, vectorization, prefetchers and multi-core environments to keep performance rising with each product generation. However, so has the difficulty in making proper use of all these mechanisms, or even evaluating whether one's program makes good use of a machine, whether users' needs match a CPU's design, or, for CPU architects, knowing how each feature really affects customers. This thesis focuses on increasing the observability of potential bottlenecks in HPC computational loops and how they relate to each other in modern microarchitectures. We will first introduce a framework combining CQA and DECAN (respectively static and dynamic analysis tools) to get detailed performance metrics on small codelets in various execution scenarios. We will then present PAMDA, a performance analysis methodology leveraging elements obtained from codelet analysis to detect potential performance problems in HPC applications and help resolve them. A work extending the Cape linear model to better cover Sandy Bridge and give it more flexibility for HW/SW codesign purposes will also be described. It will be directly used in VP3, a tool evaluating the performance gains that vectorizing loops could provide. Finally, we will describe UFS, an approach combining static analysis and cycle-accurate simulation to very quickly estimate a loop's execution time while accounting for out-of-order limitations in modern CPUs.
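
    The following is an illustrative sketch only (assumed data structures, not CQA/DECAN/PAMDA code) of the basic idea of combining static and dynamic views: a static cycles-per-iteration projection derived from the assembly is compared against measured cycles to flag loops that lose performance to effects the static model ignores (memory stalls, out-of-order limits, etc.).

        from dataclasses import dataclass

        @dataclass
        class LoopProfile:
            name: str
            static_cycles_per_iter: float    # static projection (no stalls assumed)
            measured_cycles_per_iter: float  # dynamic measurement on real hardware

        def efficiency(p: LoopProfile) -> float:
            """Fraction of the static projection actually achieved (1.0 = ideal)."""
            return p.static_cycles_per_iter / p.measured_cycles_per_iter

        # Hypothetical codelet measurements.
        loops = [LoopProfile("daxpy-like codelet", 1.0, 1.1),
                 LoopProfile("stencil codelet", 2.0, 5.5)]
        for lp in loops:
            verdict = "OK" if efficiency(lp) > 0.8 else "investigate stalls"
            print(f"{lp.name}: efficiency {efficiency(lp):.2f} -> {verdict}")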

    Flavor Composition of the High-Energy Neutrino Events in IceCube

    The IceCube experiment has recently reported the observation of 28 high-energy (> 30 TeV) neutrino events, separated into 21 showers and 7 muon tracks, consistent with an extraterrestrial origin. In this Letter, we compute the compatibility of such an observation with possible combinations of neutrino flavors with relative proportion (α_e : α_μ : α_τ)⊕. Although the 7:21 track-to-shower ratio naively favors the canonical (1:1:1)⊕ at Earth, this is not true once the atmospheric muon and neutrino backgrounds are properly accounted for. We find that, for an astrophysical neutrino E⁻² energy spectrum, (1:1:1)⊕ at Earth is disfavored at 81% C.L. If this proportion does not change, 6 more years of data would be needed to exclude (1:1:1)⊕ at Earth at 3σ C.L. Indeed, with the recently released 3-yr data, that flavor composition is excluded at 92% C.L. The best fit is obtained for (1:0:0)⊕ at Earth, which cannot be achieved from any flavor ratio at the sources with averaged oscillations during propagation. If confirmed, this result would suggest either a misunderstanding of the expected background events, or a misidentification of tracks as showers, or, even more compellingly, some exotic physics which deviates from the standard scenario.
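
    To make the propagation argument concrete, here is a small illustrative calculation (not the Letter's statistical analysis): under averaged oscillations the composition at Earth is α_Earth = P α_source with P_ab = Σ_i |U_ai|² |U_bi|², and no source composition maps onto (1:0:0)⊕ at Earth. The mixing angles below are approximate best-fit values and δ_CP is set to zero for simplicity.

        import numpy as np

        # Approximate mixing angles (assumed values); delta_CP = 0 for simplicity.
        s12, s23, s13 = np.sqrt([0.31, 0.50, 0.022])
        c12, c23, c13 = np.sqrt(1 - np.array([s12, s23, s13]) ** 2)

        # Standard PMNS parametrization with delta_CP = 0.
        U = np.array([
            [c12 * c13,                     s12 * c13,                    s13],
            [-s12 * c23 - c12 * s23 * s13,  c12 * c23 - s12 * s23 * s13,  s23 * c13],
            [s12 * s23 - c12 * c23 * s13,  -c12 * s23 - s12 * c23 * s13,  c23 * c13],
        ])

        # Averaged-oscillation flavor transition probabilities.
        P = (np.abs(U) ** 2) @ (np.abs(U) ** 2).T

        for label, src in [("pion decay (1:2:0)", [1.0, 2.0, 0.0]),
                           ("muon-damped (0:1:0)", [0.0, 1.0, 0.0]),
                           ("neutron decay (1:0:0)", [1.0, 0.0, 0.0])]:
            src = np.array(src) / sum(src)
            print(label, "-> at Earth:", np.round(P @ src, 2))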

    Constraints on dark matter annihilation from CMB observations before Planck

    We compute the bounds on the dark matter (DM) annihilation cross section using the most recent Cosmic Microwave Background measurements from WMAP9, SPT'11 and ACT'10. We consider DM with mass in the MeV-TeV range annihilating 100% into either an e⁺e⁻ or a μ⁺μ⁻ pair. We consider a realistic energy deposition model, which includes the dependence on the redshift, DM mass and annihilation channel. We exclude the canonical thermal relic abundance cross section (⟨σv⟩ = 3 × 10⁻²⁶ cm³ s⁻¹) for DM masses below 30 GeV and 15 GeV for the e⁺e⁻ and μ⁺μ⁻ channels, respectively. A priori, DM annihilating in halos could also modify the reionization history of the Universe at late times. We implement a realistic halo model taken from results of state-of-the-art N-body simulations and consider a mixed reionization mechanism, consisting of reionization from DM as well as from first stars. We find that the constraints on DM annihilation remain unchanged, even when large uncertainties on the halo model parameters are considered. Comment: v4 corresponds to the published version.
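
    As a back-of-the-envelope illustration of the quantity being constrained (approximate cosmological numbers; not the paper's Boltzmann-code pipeline), the smooth-background energy injection rate from s-wave annihilation scales as ρ_DM(z)² ⟨σv⟩ / m_χ:

        RHO_CRIT0 = 9.47e-30   # g / cm^3, approximate critical density today
        OMEGA_DM = 0.26        # approximate DM density parameter
        C_CM_S = 2.998e10      # speed of light in cm / s
        GEV_TO_G = 1.783e-24   # grams per GeV/c^2

        def injection_rate(z, sigma_v_cm3_s=3e-26, m_chi_gev=30.0):
            """Energy injected per volume and time (erg cm^-3 s^-1) at redshift z."""
            rho_dm = RHO_CRIT0 * OMEGA_DM * (1 + z) ** 3   # smooth background, g / cm^3
            n_chi = rho_dm / (m_chi_gev * GEV_TO_G)        # number density, cm^-3
            # n^2 <sigma v> / 2 annihilations per volume, 2 m_chi c^2 released each.
            return 0.5 * n_chi ** 2 * sigma_v_cm3_s * 2 * (m_chi_gev * GEV_TO_G) * C_CM_S ** 2

        print(injection_rate(z=600))   # rate at a recombination-era redshift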

    Constraining dark matter late-time energy injection: decays and p-wave annihilations

    We use the latest cosmic microwave background (CMB) observations to provide updated constraints on the dark matter lifetime as well as on p-wave suppressed annihilation cross sections in the 1 MeV to 1 TeV mass range. In contrast to scenarios with an s-wave dominated annihilation cross section, which mainly affect the CMB close to the last scattering surface, signatures associated with these scenarios essentially appear at low redshifts (z ≲ 50) when structure began to form, and thus manifest at lower multipoles in the CMB power spectrum. We use data from Planck, WMAP9, SPT and ACT, as well as Lyman-α measurements of the matter temperature at z ~ 4, to set a 95% confidence level lower bound on the dark matter lifetime of ~4 × 10²⁵ s for m_χ = 100 MeV. This bound becomes lower by an order of magnitude at m_χ = 1 TeV due to inefficient energy deposition into the inter-galactic medium. We also show that structure formation can enhance the effect of p-wave suppressed annihilation cross sections by many orders of magnitude with respect to the background cosmological rate, although even with this enhancement, CMB constraints are not yet strong enough to reach the thermal relic value of the cross section.
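
    As a quick illustrative estimate (assumed numbers, not the paper's analysis), a lifetime bound of τ ~ 4 × 10²⁵ s means only a tiny fraction of the dark matter has decayed over the age of the Universe, yet the associated energy release is already within reach of the CMB and Lyman-α data:

        import math

        T_UNIVERSE_S = 4.35e17   # approximate age of the Universe in seconds

        def decayed_fraction(tau_s):
            """Fraction of DM particles that have decayed by today."""
            return 1.0 - math.exp(-T_UNIVERSE_S / tau_s)

        tau = 4e25
        print(f"decayed fraction for tau = {tau:.0e} s: {decayed_fraction(tau):.2e}")
        # about 1e-8 of the DM rest-mass energy budget, per the bound above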